compute service
Andrew Feldman, Co-Founder & CEO of Cerebras Systems - Interview Series
Andrew is co-founder and CEO of Cerebras Systems. He is an entrepreneur dedicated to pushing boundaries in the compute space. Prior to Cerebras, he co-founded and was CEO of SeaMicro, a pioneer of energy-efficient, high-bandwidth microservers; SeaMicro was acquired by AMD in 2012 for $357M. Before SeaMicro, Andrew was Vice President of Product Management, Marketing, and Business Development at Force10 Networks, which was later sold to Dell for $800M.
AWS re:Invent 2022 roundup: Data management, AI, compute take center stage
As businesses grapple with growing volumes of data collected and generated by a myriad of cloud-based applications, Amazon Web Services (AWS) unveiled a wide range of new applications and product enhancements this week at its annual re:Invent conference, geared toward optimizing data analytics and governance and bolstering the computing infrastructure that supports them. Over the last few days, the company launched new services and features across its storage, compute, analytics, machine learning, database, and security offerings, and made its first foray into supply chain management. Here is a roundup of the major announcements, with links to articles containing more details about the updates. A major theme at re:Invent 2022 was Amazon's effort to ease data management and analytics for enterprises: the company announced a dozen updates to its data services, including two new capabilities, Amazon Aurora zero-ETL integration with Amazon Redshift and Amazon Redshift integration for Apache Spark, that it claims will make the extract, transform, load (ETL) process obsolete.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Services (0.91)
Using container images to run TensorFlow models in AWS Lambda
TensorFlow is an open-source machine learning (ML) library widely used to develop neural networks and ML models. These models are usually trained on multiple GPU instances to speed up training, resulting in long, expensive training runs and model sizes of up to a few gigabytes. Once trained, the models are deployed to production to produce inferences, served as synchronous, asynchronous, or batch-based workloads. The serving endpoints need to be highly scalable and resilient, handling anywhere from zero to millions of requests.
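The deployment pattern the article describes, loading the model once at container start so warm Lambda invocations reuse it, can be sketched as follows. This is a minimal illustration, not the article's code: the model here is a hypothetical stand-in (a plain Python function) for what would be a `tf.keras.models.load_model` call inside a real container image, and the handler follows the AWS Lambda Python runtime's `(event, context)` signature.

```python
import json

# Stand-in for loading a trained TensorFlow model baked into the container
# image, e.g. model = tf.keras.models.load_model("/opt/ml/model") in a real
# deployment. Loading at module scope means warm invocations skip the cost.
def load_model():
    # Hypothetical "model": doubles each input feature.
    return lambda instances: [[2 * x for x in row] for row in instances]

model = load_model()

def handler(event, context):
    # API Gateway delivers the request body as a JSON string.
    instances = json.loads(event["body"])["instances"]
    predictions = model(instances)
    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": predictions}),
    }
```

Because the load happens outside `handler`, only cold starts pay for model initialization; every subsequent request on the same container goes straight to inference.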
Using container images to run PyTorch models in AWS Lambda
PyTorch is an open-source machine learning (ML) library widely used to develop neural networks and ML models. These models are usually trained on multiple GPU instances to speed up training, resulting in long, expensive training runs and model sizes of up to a few gigabytes. Once trained, the models are deployed to production to produce inferences, served as synchronous, asynchronous, or batch-based workloads. The serving endpoints must be highly scalable and resilient, handling anywhere from zero to millions of requests.
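The article's point that the same endpoint may serve synchronous single-record requests and batch workloads can be sketched with a handler that normalizes both shapes. Again this is a dependency-free illustration under stated assumptions: the scoring function is a hypothetical stand-in for a real PyTorch model, which would typically be loaded once via something like `torch.jit.load(...)` followed by `model.eval()` and scored under `torch.no_grad()`.

```python
import json

# Stand-in for a trained PyTorch model; a real container image would load it
# once at module scope (torch.jit.load + model.eval()) and run inference
# inside torch.no_grad(). A plain function keeps this sketch runnable.
def load_model():
    return lambda batch: [sum(row) for row in batch]  # hypothetical scoring

model = load_model()

def handler(event, context):
    # Accept both a single record and a batch: normalize to a list of rows.
    payload = json.loads(event["body"])
    batch = payload["instances"]
    if batch and not isinstance(batch[0], list):
        batch = [batch]  # single record becomes a one-row batch
    scores = model(batch)
    return {"statusCode": 200, "body": json.dumps({"scores": scores})}
```

Normalizing the input shape up front keeps one code path for scoring, so the synchronous and batch cases differ only in how many rows arrive per invocation.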
AWS re:Invent 2019 - Predictions And A Wishlist
With less than a week to go, excitement and anticipation are building for the industry's largest cloud computing conference, AWS re:Invent. As an analyst, I have attempted to predict the announcements from re:Invent (2018, 2017) with decent accuracy. But with each passing year, it is becoming increasingly tough to predict the year-end news from Vegas. Amazon keeps venturing into areas least expected by analysts, customers, and competitors alike. AWS Ground Station is an example of how creative the teams at Amazon can be in conceiving new products and services.
- Information Technology > Services (0.49)
- Information Technology > Software (0.33)
AWS Adding Artificial Intelligence, Compute Services to Cloud Lineup
NEW YORK -- Amazon is dealing with striking workers in Europe, site disruptions during its Prime Day sale event, and protesters inside and outside the Javits Convention Center, site of this week's AWS NYC Summit 2018. None of this appeared to bother Amazon Web Services executives at the Summit, who announced new capabilities for the company's artificial intelligence, machine learning, and compute services on the AWS cloud. With artificial intelligence and machine learning services in demand, AWS rolled out improvements to its SageMaker service, which enables users to build and deploy models in the cloud. Dr. Matt Wood, AWS's General Manager for Machine Learning, announced two updates to help speed up the service: SageMaker Streaming Algorithms and SageMaker Batch Transform. Streaming Algorithms enables users to stream large amounts of training data from the S3 storage service into SageMaker.
- North America > United States > New York (0.25)
- Europe (0.25)
- Information Technology > Services (0.37)
- Media (0.34)